
    Perceptual recognition of familiar objects in different orientations

    Recent approaches to object recognition have suggested that representations are view-dependent rather than object-centred, as previously asserted by Marr (Marr and Nishihara, 1978). The exact nature of these view-centred representations, however, differs across theories. Palmer suggested that a single canonical view represents an object in memory (Palmer et al., 1981), whereas other studies have shown that each object may have more than one view-point representation (Tarr and Pinker, 1989). A set of experiments was run to determine the nature of the visual representation in memory of rigid, familiar objects presented foveally and in peripheral vision. In the initial set of experiments, recognition times were measured for a selection of common, elongated objects rotated in 30° increments around the three different axes and their combinations. Significant main effects of orientation were found in all experiments. This effect was attributed to the delay in recognising objects when foreshortened. Objects with strong gravitational uprights yielded the same orientation effects as objects without gravitational uprights. Recognition times for objects rotated in the picture plane were found to be independent of orientation. The results were not dependent on practice with the objects. There was no benefit found for shaded objects over silhouetted objects. The findings were highly consistent across the experiments. Four experiments were also carried out which tested the detectability of objects presented foveally among a set of similar objects. The subjects viewed an object picture (target) surrounded by eight search pictures arranged in a circular array. The task was to locate the picture-match of the target object (which was sometimes absent) as fast as possible. All of the objects had prominent elongated axes and were viewed perpendicular to this axis. When the object was present in the search array, it could appear in one of five orientations: its original orientation, rotated in the picture plane by 30° or 60°, or rotated in depth by 30° or 60°. Highly consistent results were found across the four experiments. Objects rotated in depth by 60° took longer to find and were less likely to be found in the first saccade than objects in all other orientations. These findings were independent of the type of display (i.e., randomly rotated distractors or aligned distractors) and of the task (matching to a picture or to the name of an object). It was concluded that there was no evidence that an abstract three-dimensional representation was used in searching for an object. The results from these experiments are compatible with the notion of multiple-view representations of objects in memory. There was no evidence that objects were stored as single, object-centred representations. Representations appear to be based initially on the familiar views of an object, but with practice on other views, those views which hold the maximum information about the object are stored. Novel views are transformed to match these stored views, and different candidates for the transformation process are discussed.
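    To make the multiple-views account concrete, the following is a minimal illustrative sketch (not the thesis's model): recognition time grows with the angular distance between a test view and the nearest stored view, so practised views and views close to them are recognised fastest. The stored orientations, baseline time, and per-degree cost are assumptions chosen purely for illustration.

    ```python
    # Illustrative sketch of a multiple-views account of recognition.
    # The stored views, baseline RT, and per-degree transformation cost
    # are invented parameters, not values estimated in the experiments.

    def angular_distance(a, b):
        """Smallest unsigned angle between two orientations, in degrees."""
        d = abs(a - b) % 360
        return min(d, 360 - d)

    def predicted_rt(test_view, stored_views, base_rt=500.0, ms_per_deg=2.0):
        """Recognition time = baseline + cost of transforming the novel
        view to the nearest stored view."""
        cost = min(angular_distance(test_view, v) for v in stored_views)
        return base_rt + ms_per_deg * cost

    stored = [0, 120]  # two familiar (practised) views of an object
    for view in range(0, 360, 30):
        print(f"{view:3d} deg: {predicted_rt(view, stored):.0f} ms")
    ```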

    Do synaesthesia and mental imagery tap into similar cross-modal processes?

    Synaesthesia has previously been linked with imagery abilities, although an understanding of a causal role for mental imagery in broader synaesthetic experiences remains elusive. This can be partly attributed to our relatively poor understanding of imagery in sensory domains beyond vision. Investigations into the neural and behavioural underpinnings of mental imagery have nevertheless identified an important role for imagery in perception, particularly in mediating cross-modal interactions. However, the phenomenology of synaesthesia gives rise to the assumption that the associated cross-modal interactions may be encapsulated and specific to synaesthesia. As such, evidence for a link between imagery and perception may not generalize to synaesthesia. Here, we present results that challenge this idea: first, we found enhanced somatosensory imagery evoked by visual stimuli of body parts in mirror-touch synaesthetes, relative to other synaesthetes or controls. Moreover, this enhanced imagery generalized to tactile object properties not directly linked to their synaesthetic associations. Second, we report evidence that the concurrent experience evoked in grapheme-colour synaesthesia was sufficient to trigger visual-to-tactile correspondences that are common to all. Together, these findings show that enhanced mental imagery is a consistent hallmark of synaesthesia, and suggest the intriguing possibility that imagery may facilitate the cross-modal interactions that underpin synaesthetic experiences. This article is part of a discussion meeting issue 'Bridging senses: novel insights from synaesthesia'.

    A Wii Bit of Fun: A Novel Platform to Deliver Effective Balance Training to Older Adults

    BACKGROUND: Falls and fall-related injuries are symptomatic of an aging population. This study aimed to design, develop, and deliver a novel method of balance training, using an interactive game-based system to promote engagement, with the inclusion of older adults at both high and low risk of experiencing a fall. STUDY DESIGN: Eighty-two older adults (65 years of age and older) were recruited from sheltered accommodation and local activity groups. Forty volunteers were randomly selected and received 5 weeks of balance game training (5 males, 35 females; mean, 77.18 ± 6.59 years), whereas the remaining control participants recorded levels of physical activity (20 males, 22 females; mean, 76.62 ± 7.28 years). The effect of balance game training was measured on levels of functional balance and balance confidence in individuals with and without quantifiable balance impairments. RESULTS: Balance game training had a significant effect on levels of functional balance and balance confidence (P …).

    Haptic recognition memory and lateralisation for verbal and nonverbal shapes

    Laterality effects generally refer to an advantage for verbal processing in the left hemisphere and for non-verbal processing in the right hemisphere, and are often demonstrated in memory tasks in vision and audition. In contrast, their role in haptic memory is less well understood. In this study, we examined haptic recognition memory and laterality for letters and nonsense shapes. We used both upper and lower case letters, with the latter designed to be more complex in shape. Participants performed a recognition memory task with the left and right hand separately. Recognition memory performance (capacity and bias-free d') was higher and response times were faster for upper case letters than for lower case letters and nonsense shapes. The right hand performed best for upper case letters when it performed the task after the left hand. This right hand/left hemisphere advantage appeared for upper case letters but not lower case letters, which also had a lower memory capacity, probably due to their more complex spatial shape. These findings suggest that verbal laterality effects in haptic memory are not very prominent, which may be because haptic verbal stimuli are processed mainly as spatial objects without reaching robust verbal coding in memory.

    The Effect of Combined Sensory and Semantic Components on Audio–Visual Speech Perception in Older Adults

    Previous studies have found that perception in older people benefits from multisensory over unisensory information. As normal speech recognition is affected by both the auditory input and the visual lip movements of the speaker, we investigated the efficiency of audio–visual integration in an older population by manipulating the relative reliability of the auditory and visual information in speech. We also investigated the role of the semantic context of the sentence, to assess whether audio–visual integration is affected by top-down semantic processing. We presented participants with audio–visual sentences in which the visual component was either blurred or not blurred. We found that there was a greater cost in recall performance for semantically meaningless speech in the audio–visual 'blur' condition than in the audio–visual 'no blur' condition, and this effect was specific to the older group. Our findings have implications for understanding how aging affects efficient multisensory integration for the perception of speech, and suggest that multisensory inputs may benefit speech perception in older adults when the semantic content of the speech is unpredictable.
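    The blur manipulation can be read against the standard maximum-likelihood account of cue combination, in which each modality is weighted by its reliability (inverse variance), so degrading vision shifts weight towards audition. The sketch below illustrates that general account with assumed variances; it is not the analysis reported in the study.

    ```python
    # Reliability-weighted (inverse-variance) cue combination: a textbook
    # account of audio-visual integration, shown here with assumed numbers.

    def fuse(mu_a, var_a, mu_v, var_v):
        """Combine auditory and visual estimates, weighting each by its
        reliability (1/variance); the fused estimate is the most reliable."""
        w_a = (1 / var_a) / (1 / var_a + 1 / var_v)
        mu = w_a * mu_a + (1 - w_a) * mu_v
        var = 1 / (1 / var_a + 1 / var_v)
        return mu, var

    print(fuse(0.0, 1.0, 1.0, 0.25))  # sharp visual signal dominates
    print(fuse(0.0, 1.0, 1.0, 4.0))   # blurred visual signal is discounted
    ```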

    Laterality effects in the haptic discrimination of verbal and non-verbal shapes

    The left hemisphere is known to be generally predominant in verbal processing and the right hemisphere in non-verbal processing. We studied whether verbal and non-verbal lateralization is present in haptics by comparing discrimination performance between letters and nonsense shapes. We addressed stimulus complexity by introducing lower case letters, which are verbally identical to upper case letters but have a more complex shape. The participants performed a same-different haptic discrimination task for upper and lower case letters and nonsense shapes with the left and right hand separately. We used signal detection theory to determine discriminability (d') and criterion (c), and we measured reaction times. Discrimination was better with the left hand for nonsense shapes, marginally better with the right hand for upper case letters, and no different between the hands for lower case letters. For lower case letters, the right hand showed a strong bias to respond "different", while the left hand showed faster reaction times. Our results are in agreement with right-hemisphere lateralization for non-verbal material. The complexity of a verbal shape matters in haptics, as lower case letters seem to be processed less as verbal material and more as spatial shapes than upper case letters.
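    For reference, the signal detection measures named above can be computed from hit and false-alarm rates as follows; the example rates below are invented for illustration, not data from the study.

    ```python
    # d' and criterion c from hit and false-alarm rates (signal detection
    # theory); the example rates are invented, not data from the study.
    from statistics import NormalDist

    z = NormalDist().inv_cdf  # inverse of the standard normal CDF

    def sdt_measures(hit_rate, fa_rate):
        d_prime = z(hit_rate) - z(fa_rate)   # discriminability
        c = -(z(hit_rate) + z(fa_rate)) / 2  # response criterion (bias)
        return d_prime, c

    d, c = sdt_measures(hit_rate=0.85, fa_rate=0.20)
    print(f"d' = {d:.2f}, c = {c:.2f}")  # d' = 1.88, c = -0.10
    ```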

    Perceptual learning shapes multisensory causal inference via two distinct mechanisms

    To accurately represent the environment, our brains must integrate sensory signals from a common source while segregating those from independent sources. A reasonable strategy for performing this task is to restrict integration to cues that coincide in space and time. However, because multisensory signals are subject to differential transmission and processing delays, the brain must retain a degree of tolerance for temporal discrepancies. Recent research suggests that the width of this 'temporal binding window' can be reduced through perceptual learning; however, little is known about the mechanisms underlying these experience-dependent effects. Here, in separate experiments, we measure the temporal and spatial binding windows of human participants before and after training on an audiovisual temporal discrimination task. We show that training leads to two distinct effects on multisensory integration: (i) a specific narrowing of the temporal binding window that does not transfer to spatial binding, and (ii) a general reduction in the magnitude of crossmodal interactions across all spatiotemporal disparities. These effects arise naturally from a Bayesian model of causal inference in which learning improves the precision of audiovisual timing estimation whilst concomitantly decreasing the prior expectation that stimuli emanate from a common source.
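    A minimal sketch of the kind of causal-inference computation the model describes: the posterior probability that two signals share a common cause, given their temporal disparity. Training is modelled as (i) improved timing precision and (ii) a weaker common-cause prior; all parameter values below are illustrative assumptions, not the paper's fitted values.

    ```python
    # Hedged sketch of Bayesian causal inference over temporal disparity.
    # Sigmas, disparities, and priors are illustrative, not fitted values.
    import math

    def norm_pdf(x, sigma):
        """Zero-mean Gaussian density."""
        return math.exp(-0.5 * (x / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

    def p_common(disparity, sigma_timing, sigma_indep, prior_common):
        """Posterior probability that audio and visual signals came from
        one source, given their measured temporal disparity (in ms)."""
        like_one = norm_pdf(disparity, sigma_timing)  # one shared cause
        like_two = norm_pdf(disparity, sigma_indep)   # independent causes
        post = like_one * prior_common
        return post / (post + like_two * (1 - prior_common))

    # Before training: coarse timing, strong common-cause prior.
    print(p_common(150, sigma_timing=120, sigma_indep=400, prior_common=0.5))
    # After training: sharper timing (i) and a weaker prior (ii) both
    # reduce binding at large disparities, narrowing the binding window.
    print(p_common(150, sigma_timing=60, sigma_indep=400, prior_common=0.3))
    ```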

    Multisensory Perception and Learning: Linking Pedagogy, Psychophysics, and Human–Computer Interaction

    In this review, we discuss how specific sensory channels can mediate the learning of properties of the environment. In recent years, schools have increasingly been using multisensory technology for teaching. However, this use of technology has yet to be sufficiently grounded in neuroscientific and pedagogical evidence. Researchers have recently renewed attention to the role of communication between sensory modalities during development. In the current review, we outline four principles, based on theoretical models of multisensory development and embodiment, that will aid technological development to foster in-depth perceptual and conceptual learning of mathematics. We also discuss how a multidisciplinary approach offers a unique contribution to the development of new practical solutions for learning in school. Scientists, engineers, and pedagogical experts offer their interdisciplinary points of view on this topic. At the end of the review, we present our results showing that multiple sensory inputs and sensorimotor associations in multisensory technology can improve the discrimination of angles and may also be useful for broader educational purposes. Finally, we present an application, 'RobotAngle', developed for primary (i.e., elementary) school children, which uses sounds and body movements to teach angles.

    The role of social cues in the deployment of spatial attention: head-body relationships automatically activate directional spatial codes in a Simon task

    The role of body orientation in the orienting and allocation of social attention was examined using an adapted Simon paradigm. Participants categorized the facial expression of forward-facing, computer-generated human figures by pressing one of two response keys, located to the left or right of the observer's body midline, while the orientation of the stimulus figure's body (trunk, arms, and legs), the task-irrelevant feature of interest, was manipulated (oriented toward the left or right visual hemifield) with respect to the spatial location of the required response. We found that when the orientation of the body was compatible with the required response location, responses were slower than when body orientation was incompatible with the response location. In line with a model put forward by Hietanen (1999), this reverse compatibility effect suggests that body orientation is automatically processed into a directional spatial code, but that this code is based on an integration of head and body orientation within an allocentric frame of reference. Moreover, we argue that this code may be derived from the motion information implied in the image of a figure when head and body orientation are incongruent. Our results have implications for understanding the nature of the information that affects the allocation of attention for social orienting.

    The development of visuotactile congruency effects for sequences of events

    Sensitivity to the temporal coherence of visual and tactile signals increases perceptual reliability and is evident during infancy. However, it is not clear how, or whether, bidirectional visuotactile interactions change across childhood. Furthermore, no study has explored whether viewing a body modulates how children perceive visuotactile sequences of events. Here, children aged 5–7 years (n = 19), 8 and 9 years (n = 21), and 10–12 years (n = 24) and adults (n = 20) discriminated the number of target events (one or two) in a task-relevant modality (touch or vision) while ignoring distractors (one or two) in the opposing modality. While participants performed the task, an image of either a hand or an object was presented. Children aged 5–7 years and 8 and 9 years showed larger crossmodal interference from visual distractors when discriminating tactile targets than the converse. Across age groups, this interference was strongest when two visual distractors were presented with one tactile target, implying a "fission-like" crossmodal effect (perceiving one event as two). There was no influence of visual context (viewing a hand or non-hand image) on visuotactile interactions for any age group. Our results suggest robust interference from discontinuous visual information on tactile discrimination of sequences of events during early and middle childhood. These findings are discussed with respect to age-related changes in sensory dominance, selective attention, and multisensory processing.